This paper presents the baseline approach for the organized 2nd COV19D Competition, occurring in the framework of the AIMIA Workshop at the European Conference on Computer Vision (ECCV 2022). It presents the COV19-CT-DB database, which is annotated for COVID-19 detection and consists of about 7,700 3-D CT scans. A part of the database, consisting of the COVID-19 cases, has been further annotated in terms of four COVID-19 severity conditions. We have split the database into training, validation and test datasets. The former two datasets are used for training and validation of machine learning models, while the latter will be used for evaluation of the developed models. The baseline approach consists of a deep learning method based on a CNN-RNN network, and its performance on the COV19-CT-DB database is reported.
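As an illustration of what such a baseline can look like (a sketch, not the authors' exact architecture), the following PyTorch snippet embeds each CT slice with a 2-D CNN and aggregates the slice sequence with a GRU; the backbone choice, dimensions, and input layout are assumptions:

```python
import torch
import torch.nn as nn
from torchvision import models

class CNNRNNBaseline(nn.Module):
    """Sketch of a CNN-RNN classifier for 3-D CT scans: a 2-D CNN encodes
    each slice, and a GRU aggregates the slice features over the scan."""
    def __init__(self, hidden_dim=128, num_classes=2):
        super().__init__()
        backbone = models.resnet18(weights=None)
        backbone.fc = nn.Identity()          # keep 512-d slice embeddings
        self.cnn = backbone
        self.rnn = nn.GRU(512, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, num_classes)

    def forward(self, scan):                 # scan: (batch, slices, 3, H, W)
        b, s = scan.shape[:2]
        feats = self.cnn(scan.flatten(0, 1)) # (batch * slices, 512)
        feats = feats.view(b, s, -1)
        _, h = self.rnn(feats)               # h: (1, batch, hidden_dim)
        return self.head(h[-1])              # per-scan class logits

logits = CNNRNNBaseline()(torch.randn(2, 16, 3, 224, 224))
```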
Merging satellite products and ground-based measurements is often required for obtaining precipitation datasets that simultaneously cover large regions with high density and are more accurate than pure satellite precipitation products. Machine and statistical learning regression algorithms are regularly utilized in this endeavour. At the same time, tree-based ensemble algorithms for regression are adopted in various fields for solving algorithmic problems with high accuracy and low computational cost. The latter can constitute a crucial factor for selecting algorithms for satellite precipitation product correction at the daily and finer time scales, where the size of the datasets is particularly large. Still, information on which tree-based ensemble algorithm to select in such a case for the contiguous United States (US) is missing from the literature. In this work, we conduct an extensive comparison between three tree-based ensemble algorithms, specifically random forests, gradient boosting machines (gbm) and extreme gradient boosting (XGBoost), in the context of interest. We use daily data from the PERSIANN (Precipitation Estimation from Remotely Sensed Information using Artificial Neural Networks) and the IMERG (Integrated Multi-satellitE Retrievals for GPM) gridded datasets. We also use earth-observed precipitation data from the Global Historical Climatology Network daily (GHCNd) database. The experiments refer to the entire contiguous US and additionally include the application of the linear regression algorithm for benchmarking purposes. The results suggest that XGBoost is the best-performing tree-based ensemble algorithm among those compared. They also suggest that IMERG is more useful than PERSIANN in the context investigated.
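A minimal sketch of such a comparison with scikit-learn and xgboost; the features, targets, and hyperparameters below are placeholders rather than the paper's experimental setup:

```python
# Sketch: comparing tree-based ensembles for correcting a satellite
# precipitation product against gauge observations. The arrays are synthetic
# stand-ins; the paper uses PERSIANN/IMERG grids and GHCNd gauge data.
import numpy as np
from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
from xgboost import XGBRegressor

rng = np.random.default_rng(0)
X = rng.random((1000, 3))          # e.g. satellite estimate, lon/lat, elevation
y = X[:, 0] + 0.1 * rng.standard_normal(1000)   # gauge precipitation proxy

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
models = {
    "linear (benchmark)": LinearRegression(),
    "random forest": RandomForestRegressor(n_estimators=200, random_state=0),
    "gbm": GradientBoostingRegressor(random_state=0),
    "xgboost": XGBRegressor(n_estimators=200, random_state=0),
}
for name, model in models.items():
    pred = model.fit(X_tr, y_tr).predict(X_te)
    print(f"{name}: RMSE = {np.sqrt(mean_squared_error(y_te, pred)):.3f}")
```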
Being able to forecast the popularity of new garment designs is very important in an industry as fast-paced as fashion, both in terms of profitability and reducing the problem of unsold inventory. Here, we attempt to address this task in order to provide informative forecasts to fashion designers within a virtual reality designer application that will allow them to fine-tune their creations based on current consumer preferences within an interactive and immersive environment. To achieve this we have to deal with the following central challenges: (1) the proposed method should not hinder the creative process and thus it has to rely only on the garment's visual characteristics, (2) new garments lack historical data from which to extrapolate their future popularity and (3) fashion trends in general are highly dynamic. To this end, we develop a computer vision pipeline fine-tuned on fashion imagery in order to extract relevant visual features along with the category and attributes of the garment. We propose a hierarchical label sharing (HLS) pipeline for automatically capturing hierarchical relations among fashion categories and attributes. Moreover, we propose MuQAR, a Multimodal Quasi-AutoRegressive neural network that forecasts the popularity of new garments by combining their visual and categorical features, while an autoregressive neural network models the popularity time series of the garment's category and attributes. Both the proposed HLS and MuQAR prove capable of surpassing the current state-of-the-art on key benchmark datasets: DeepFashion for image classification and VISUELLE for new garment sales forecasting.
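A sketch in the spirit of the quasi-autoregressive idea (not the published MuQAR architecture): an autoregressive module consumes the popularity time series of the garment's category and attributes, and its state is fused with the garment's visual and categorical features; all dimensions and layers below are assumptions:

```python
import torch
import torch.nn as nn

class QuasiAutoRegressive(nn.Module):
    """Toy fusion of visual/categorical features of a new garment with an
    autoregressive summary of its category's popularity time series."""
    def __init__(self, vis_dim=512, cat_dim=32, hidden=64):
        super().__init__()
        self.ar = nn.GRU(1, hidden, batch_first=True)    # category time series
        self.fuse = nn.Sequential(
            nn.Linear(vis_dim + cat_dim + hidden, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),                        # popularity forecast
        )

    def forward(self, visual, category, series):
        _, h = self.ar(series.unsqueeze(-1))             # (1, batch, hidden)
        return self.fuse(torch.cat([visual, category, h[-1]], dim=-1))

pred = QuasiAutoRegressive()(
    torch.randn(4, 512), torch.randn(4, 32), torch.randn(4, 12)
)
```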
Although many machine learning methods, especially from the field of deep learning, have been instrumental in addressing challenges within robotic applications, we cannot take full advantage of such methods before these can provide performance and safety guarantees. The lack of trust that impedes the use of these methods mainly stems from a lack of human understanding of what exactly machine learning models have learned, and how robust their behaviour is. This is the problem the field of explainable artificial intelligence aims to solve. Based on insights from the social sciences, we know that humans prefer contrastive explanations, i.e., explanations answering the hypothetical question "what if?". In this paper, we show that linear model trees are capable of producing answers to such questions, so-called counterfactual explanations, for robotic systems, including in the case of multiple, continuous inputs and outputs. We demonstrate the use of this method to produce counterfactual explanations for two robotic applications. Additionally, we explore the issue of infeasibility, which is of particular interest in systems governed by the laws of physics.
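A toy sketch of how a counterfactual can be read off a linear model tree: within the leaf containing an input, the model is locally linear, so the smallest input change reaching a target output has a closed form (valid while the perturbed input stays in the same leaf region); all values below are made up:

```python
# Sketch: a counterfactual from a (toy) linear model tree. Within the leaf
# containing x, the model is locally linear, y = w @ x + b, so the minimal
# L2 change reaching a target output is a projection onto w.
import numpy as np

def leaf_counterfactual(x, w, b, y_target):
    """Smallest L2 perturbation of x whose local linear prediction hits
    y_target (valid while x + delta stays inside the same leaf region)."""
    residual = y_target - (w @ x + b)
    delta = residual * w / (w @ w)     # project the residual onto w
    return x + delta

x = np.array([0.5, 1.0])               # current robot state/input (assumed)
w, b = np.array([2.0, -1.0]), 0.3      # leaf's linear model (assumed)
x_cf = leaf_counterfactual(x, w, b, y_target=1.5)
print(x_cf, w @ x_cf + b)              # answers "what if?" -> prediction 1.5
```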
The extragradient method has recently gained increasing attention, due to its convergence behavior on smooth games. In $n$-player differentiable games, the eigenvalues of the Jacobian of the vector field are distributed on the complex plane, exhibiting more convoluted dynamics compared to classical (i.e., single-player) minimization. In this work, we present a polynomial-based analysis of the extragradient method with momentum for optimizing games with \emph{cross-shaped} Jacobian spectrum on the complex plane. We show two results. First, based on the hyperparameter setup, the extragradient with momentum exhibits three different modes of convergence: when the eigenvalues are distributed $i)$ on the real line, $ii)$ both on the real line and as complex conjugates, and $iii)$ only as complex conjugates. Second, we focus on case $ii)$, i.e., when the eigenvalues of the Jacobian have a \emph{cross-shaped} structure, as observed in training generative adversarial networks. For this problem class, we derive the optimal hyperparameters of the momentum extragradient method, and show that it achieves an accelerated convergence rate.
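For reference, one common form of the momentum extragradient iteration on a game's vector field $F$, with step size $\gamma$ and momentum parameter $\beta$ (a sketch; the paper's exact parameterization may differ):

```latex
% Momentum extragradient (one common form; a sketch, not necessarily
% the paper's exact iteration):
\begin{aligned}
  z_{t+1/2} &= z_t - \gamma\, F(z_t) && \text{(extrapolation step)}\\
  z_{t+1}   &= z_t - \gamma\, F(z_{t+1/2}) + \beta\,(z_t - z_{t-1}) && \text{(update with momentum)}
\end{aligned}
```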
The ability to dynamically adapt neural networks to newly-available data without performance deterioration would revolutionize deep learning applications. Streaming learning (i.e., learning from one data example at a time) has the potential to enable such real-time adaptation, but current approaches i) freeze a majority of network parameters during streaming and ii) are dependent upon offline, base initialization procedures over large subsets of data, which damages performance and limits applicability. To mitigate these shortcomings, we propose Cold Start Streaming Learning (CSSL), a simple, end-to-end approach for streaming learning with deep networks that uses a combination of replay and data augmentation to avoid catastrophic forgetting. Because CSSL updates all model parameters during streaming, the algorithm is capable of beginning streaming from a random initialization, making base initialization optional. Going further, the algorithm's simplicity allows theoretical convergence guarantees to be derived using analysis of the Neural Tangent Random Feature (NTRF). In experiments, we find that CSSL outperforms existing baselines for streaming learning in experiments on CIFAR100, ImageNet, and Core50 datasets. Additionally, we propose a novel multi-task streaming learning setting and show that CSSL performs favorably in this domain. Put simply, CSSL performs well and demonstrates that the complicated, multi-step training pipelines adopted by most streaming methodologies can be replaced with a simple, end-to-end learning approach without sacrificing performance.
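A minimal sketch of the replay-plus-augmentation core of such a streaming learner, starting from a random initialization and updating all parameters on each streamed example; the model, buffer policy, and augmentation below are simplified placeholders, not the paper's implementation:

```python
# Sketch: CSSL-style streaming with replay and data augmentation.
import random
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(32 * 32 * 3, 100))
opt = torch.optim.SGD(model.parameters(), lr=0.01)
buffer, capacity = [], 500               # reservoir-style replay buffer
loss_fn = nn.CrossEntropyLoss()

def augment(x):                          # placeholder augmentation
    return x + 0.01 * torch.randn_like(x)

def stream_step(x, y, seen):
    batch = [(x, y)] + random.sample(buffer, min(8, len(buffer)))
    xs = torch.stack([augment(xi) for xi, _ in batch])
    ys = torch.tensor([yi for _, yi in batch])
    opt.zero_grad()
    loss_fn(model(xs), ys).backward()    # update *all* model parameters
    opt.step()
    if len(buffer) < capacity:           # reservoir sampling insertion
        buffer.append((x, y))
    elif random.random() < capacity / (seen + 1):
        buffer[random.randrange(capacity)] = (x, y)

for t in range(20):                      # one example arrives at a time
    stream_step(torch.randn(3, 32, 32), random.randrange(100), seen=t)
```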
Graph Neural Networks (GNNs) have achieved great successes in many learning tasks performed on graph structures. Nonetheless, to propagate information, GNNs rely on a message-passing scheme which can become prohibitively expensive when working with industrial-scale graphs. Inspired by the PPRGo model, we propose the CorePPR model, a scalable solution that utilises a learnable convex combination of the approximate personalised PageRank and the CoreRank to diffuse multi-hop neighbourhood information in GNNs. Additionally, we incorporate a dynamic mechanism to select the most influential neighbours for a particular node, which reduces training time while preserving the performance of the model. Overall, we demonstrate that CorePPR outperforms PPRGo, particularly on large graphs where selecting the most influential nodes is particularly relevant for scalability. Our code is publicly available at: https://github.com/arielramos97/CorePPR.
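A minimal sketch of the central idea, assuming precomputed per-node score matrices; the mixing module and top-k selection below are illustrative, not the released implementation:

```python
# Sketch: a learnable convex combination of approximate personalised
# PageRank and CoreRank scores, followed by selection of the most
# influential neighbours. Score tensors are synthetic stand-ins.
import torch
import torch.nn as nn

class CorePPRWeights(nn.Module):
    def __init__(self):
        super().__init__()
        self.logit = nn.Parameter(torch.zeros(()))  # learnable mixing weight

    def forward(self, ppr_scores, corerank_scores, k=4):
        alpha = torch.sigmoid(self.logit)           # keeps the mix convex
        combined = alpha * ppr_scores + (1 - alpha) * corerank_scores
        topk = combined.topk(k, dim=-1)             # most influential nodes
        return topk.indices, torch.softmax(topk.values, dim=-1)

ppr = torch.rand(5, 10)      # per-node scores over 10 candidate neighbours
core = torch.rand(5, 10)
idx, w = CorePPRWeights()(ppr, core)
```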
We study the robustness of conformal prediction, a powerful tool for uncertainty quantification, to label noise. Our analysis addresses both regression and classification problems, characterizing when and how it is possible to construct uncertainty sets that correctly cover the unobserved noiseless ground-truth labels. Through stylized theoretical examples and practical experiments, we argue that naive conformal prediction covers the noiseless ground-truth label unless the noise distribution is adversarially designed. This leads us to believe that correcting for label noise is unnecessary except for pathological data distributions or noise sources. In such cases, we can also correct for noise of bounded size within the conformal prediction algorithm in order to ensure correct coverage of the ground-truth labels without score or data regularity.
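As background, a minimal sketch of split conformal prediction for regression, the procedure whose noise robustness is analysed; the data and point predictor below are synthetic stand-ins:

```python
# Sketch: split conformal prediction for regression.
import numpy as np

rng = np.random.default_rng(0)
x_cal = rng.random(500)
y_cal = 2 * x_cal + 0.1 * rng.standard_normal(500)   # possibly noisy labels

def predict(x):                  # any fitted point predictor (assumed)
    return 2 * x

alpha = 0.1
scores = np.abs(y_cal - predict(x_cal))              # conformity scores
q = np.quantile(scores, np.ceil((1 - alpha) * (len(scores) + 1)) / len(scores))

x_new = 0.3                      # interval with ~90% marginal coverage
interval = (predict(x_new) - q, predict(x_new) + q)
```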
In recent years, implicit surface representations through neural networks that encode the signed distance have gained popularity and have achieved state-of-the-art results. However, in contrast to conventional shape representations such as polygon meshes, implicit representations are not easily editable, and existing works that attempt to address this problem are extremely limited. In this work, we propose the first method for efficient interactive editing of signed distance functions expressed through neural networks, allowing free-form editing. Inspired by mesh sculpting software, we use a brush-based framework that is intuitive and can in the future be used by sculptors and digital artists. In order to localize the desired surface deformation, we sample the previously expressed surface by conditioning the network with a copy of itself. We introduce a novel framework for simulating sculpting-style surface edits, combining interactive surface sampling with efficient adaptation of the network weights. We evaluate our method qualitatively and quantitatively on a variety of different 3D objects and under many different edits. The reported results clearly show that our method yields high accuracy in achieving the desired edits, while preserving the geometry outside the interaction areas.
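A highly simplified sketch of the localisation idea, assuming a toy MLP as the SDF network and a spherical brush that offsets the surface; the loss, brush model, and sampling are illustrative assumptions, not the authors' implementation:

```python
# Sketch: a frozen copy of the SDF network supplies the pre-edit surface,
# and the live network is fine-tuned so the SDF changes only inside a
# brush region while matching the old field everywhere else.
import copy
import torch
import torch.nn as nn

sdf = nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, 1))
frozen = copy.deepcopy(sdf).requires_grad_(False)   # pre-edit reference
opt = torch.optim.Adam(sdf.parameters(), lr=1e-4)

brush_center = torch.tensor([0.0, 0.0, 0.5])
brush_radius, push = 0.2, 0.05                      # toy brush parameters

for _ in range(100):
    pts = torch.rand(1024, 3) * 2 - 1               # sample the volume
    inside = (pts - brush_center).norm(dim=-1, keepdim=True) < brush_radius
    target = frozen(pts) + inside * push            # deform only in brush
    loss = ((sdf(pts) - target) ** 2).mean()        # preserve geometry outside
    opt.zero_grad(); loss.backward(); opt.step()
```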
In this paper, we evaluate the impact of domain shift on human detection models when they are applied to data outside the distribution of their training set. Specifically, we introduce the OpenDR Humans in Field dataset, collected in the context of agricultural robotics applications using the Robotti platform, which allows for quantitatively measuring the impact of domain shift in such applications. Furthermore, we study the importance of manual annotation by evaluating three different scenarios concerning the training data: a) only negative samples, i.e., images that do not depict humans, b) only positive samples, i.e., images that only contain humans, and c) both negative and positive samples. Our results indicate that good performance can be achieved even when using only negative samples, provided that additional considerations are taken during the training process. We also find that positive samples increase performance, especially in terms of better localization. The dataset is publicly available for download at https://github.com/opendr-eu/datasets.